98 research outputs found
X-code: MDS array codes with optimal encoding
We present a new class of MDS (maximum distance separable) array codes of size n × n (n a prime number) called X-code. The X-codes have minimum column distance 3; that is, they can correct either one column error or two column erasures. The key novelty of X-code is a simple geometrical construction that achieves optimal encoding/update complexity, i.e., a change of any single information bit affects exactly two parity bits. The key idea in our constructions is that all parity symbols are placed in rows rather than columns.
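The update-optimality claim can be checked with a small sketch. The layout below places two parity rows beneath the information rows and computes them along diagonals of slopes ±1; the specific index convention is an assumption for illustration, not necessarily the paper's exact construction. Because each information cell lies on exactly one diagonal of each slope, flipping one information bit changes exactly two parity bits.

```python
# Sketch of an X-code-style layout over GF(2), with diagonal parity of
# slopes +1 and -1 (index convention is an assumption, not necessarily
# the paper's exact construction). n is a prime.
import random

def encode(info, n):
    """info: (n-2) x n bit array; returns the full n x n array with two parity rows."""
    arr = [row[:] for row in info] + [[0] * n, [0] * n]
    for i in range(n - 2):          # information rows
        for j in range(n):          # columns
            arr[n - 2][(j + i) % n] ^= info[i][j]   # slope +1 diagonal parity
            arr[n - 1][(j - i) % n] ^= info[i][j]   # slope -1 diagonal parity
    return arr

n = 5
random.seed(0)
info = [[random.randint(0, 1) for _ in range(n)] for _ in range(n - 2)]
before = encode(info, n)
info[1][3] ^= 1                      # flip a single information bit
after = encode(info, n)
changed = sum(before[r][c] != after[r][c]
              for r in (n - 2, n - 1) for c in range(n))
print(changed)  # 2 -> update-optimal: exactly two parity bits change
```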
Deterministic voting in distributed systems using error-correcting codes
Distributed voting is an important problem in reliable computing. In an N Modular Redundant (NMR) system, the N computational modules execute identical tasks and need to periodically vote on their current states. In this paper, we propose a deterministic majority voting algorithm for NMR systems. Our voting algorithm uses error-correcting codes to drastically reduce the average-case communication complexity. In particular, we show that the efficiency of our voting algorithm can be improved by choosing the parameters of the error-correcting code to match the probability of computational faults. For example, consider an NMR system with 31 modules, each with a state of m bits, where each module has an independent computational error probability of 10^-3. In this NMR system, our algorithm can reduce the average-case communication complexity to approximately 1.0825m, compared with the communication complexity of 31m of the naive algorithm in which every module broadcasts its local result to all other modules. We have also implemented the voting algorithm over a network of workstations. The experimental performance results match the theoretical predictions well.
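A back-of-the-envelope check (not the paper's algorithm) shows why tuning the code to the fault probability pays off in the average case: with p = 10^-3 per module, the overwhelming majority of votes see no fault at all, which is the case a well-chosen code can handle with minimal communication.

```python
# Arithmetic behind the average-case claim: with independent fault
# probability p per module, most votes see no fault at all, which is
# what a code tuned to p can exploit.
N, p = 31, 1e-3
p_all_correct = (1 - p) ** N
print(f"P(no faulty module) = {p_all_correct:.4f}")   # ~0.9695
print(f"naive cost per vote = {N}m bits")             # every module broadcasts
# The paper reports ~1.0825m average-case bits for the coded scheme,
# i.e. a ~28x reduction over the naive 31m.
```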
Coding and Scheduling for Efficient Loss-Resilient Data Broadcasting
We examine the problem of sending data to clients over a broadcast channel in a way that minimizes the expected waiting time of the clients for this data. This channel, however, is not completely reliable, and packets are occasionally lost. This poses a problem, as performance is greatly degraded by even a single packet loss. For example, when sending two items with equal demands, one lost packet will increase the expected waiting time for an item from 0.75 to 2, or 167%. We propose and analyze two solutions that attempt to minimize this degradation. In the first, we code packets; in the second, we code packets and slightly modify our schedule. The resulting degradations are 67% for the first solution and less than 1% for the second. We conclude that the second scheme is a very effective way to combat single packet losses, and we extend this solution to combat up to t packet losses per data item for any t ≀ k, where k is the number of packets per data item.
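The degradation figure quoted in the abstract is straightforward to verify: a jump from an expected wait of 0.75 slots to 2 slots is a 167% increase.

```python
# Checking the degradation figure in the abstract: with two equally
# popular items broadcast alternately, the expected wait is 0.75 item
# slots; a single lost packet pushes it to 2 slots.
no_loss, one_loss = 0.75, 2.0
degradation = (one_loss - no_loss) / no_loss
print(f"{degradation:.0%}")  # 167%
```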
Low-density MDS codes and factors of complete graphs
We present a class of array codes of size n × l, where l = 2n or 2n + 1, called B-Code. The distances of the B-Code and its dual are 3 and l − 1, respectively. The B-Code and its dual are optimal in the sense that i) they are maximum distance separable (MDS), ii) they have an optimal encoding property, i.e., the number of parity bits affected by a change of a single information bit is minimal, and iii) they have optimal length. Using a new graph description of the codes, we prove an equivalence between the construction of the B-Code (or its dual) and a combinatorial problem known as perfect one-factorization of complete graphs, thus obtaining constructions of two families of the B-Code and its dual, one of which is new. Efficient decoding algorithms are also given, both for erasure correction and for error correction. The existence of a perfect one-factorization for every complete graph with an even number of nodes is a 35-year-old conjecture in graph theory. The construction of B-Codes of arbitrary odd length would provide an affirmative answer to this conjecture.
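A one-factorization partitions the edges of a complete graph into perfect matchings; it is *perfect* when the union of any two matchings forms a Hamiltonian cycle. The sketch below builds the classical circle (round-robin) one-factorization, which is known to be perfect for K_{p+1} with p prime, and verifies perfectness for K_6. (This is the standard construction; the specific families in the paper are not reproduced here.)

```python
# Circle (round-robin) one-factorization of K_m, m even: vertices are
# Z_{m-1} plus a point at infinity (labeled m-1). Factor i pairs
# infinity with i and pairs (i+j) with (i-j) mod (m-1).
from itertools import combinations

def circle_one_factorization(m):
    q = m - 1
    factors = []
    for i in range(q):
        f = {frozenset((m - 1, i))}
        for j in range(1, q // 2 + 1):
            f.add(frozenset(((i + j) % q, (i - j) % q)))
        factors.append(f)
    return factors

def is_hamiltonian_cycle(edges, m):
    adj = {v: [] for v in range(m)}
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b); adj[b].append(a)
    if any(len(nb) != 2 for nb in adj.values()):
        return False
    seen, v, prev = {0}, 0, None
    while True:                       # walk the 2-regular graph from vertex 0
        nxt = adj[v][0] if adj[v][0] != prev else adj[v][1]
        if nxt == 0:
            return len(seen) == m     # closed the tour after visiting everyone?
        if nxt in seen:
            return False
        seen.add(nxt); prev, v = v, nxt

m = 6                                 # K_6 = K_{p+1} with p = 5 prime
F = circle_one_factorization(m)
all_edges = set().union(*F)
assert len(all_edges) == m * (m - 1) // 2          # factors partition K_6's edges
perfect = all(is_hamiltonian_cycle(a | b, m) for a, b in combinations(F, 2))
print(perfect)  # True: this one-factorization of K_6 is perfect
```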
SRC: Stable Rate Control for Streaming Media
Rate control, in conjunction with congestion control, is important and necessary to maintain both the stability of the overall network and the high quality of individual data transfer flows. In this paper, we study stable rate control algorithms for streaming data based on control theory. We introduce various control rules to keep both the sending rate and the receiver buffer stable. We also propose an adaptive two-state control mechanism to ensure that the rate control algorithms are compatible with TCP traffic. Extensive experimental results demonstrate the effectiveness of the rate control algorithms.
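The flavor of buffer-driven rate control can be illustrated with a minimal proportional controller; this is a generic control-theory sketch, not the paper's specific rules or its two-state mechanism. The sender sets its rate to the playback rate plus a correction that steers the receiver buffer toward a target occupancy, and both rate and buffer settle to stable values.

```python
# Minimal sketch of buffer-driven rate control (a generic proportional
# controller, NOT the paper's rules): the sending rate tracks the
# playback rate plus a correction toward a target buffer occupancy.
def simulate(steps=200, dt=0.1, play_rate=1.0, target=5.0, k=0.5):
    buf = 0.0
    for _ in range(steps):
        rate = play_rate + k * (target - buf)   # control rule
        buf += (rate - play_rate) * dt          # buffer fills or drains
    return buf

final = simulate()
print(round(final, 3))  # buffer converges to the 5.0-unit target
```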
Computing in the RAIN: a reliable array of independent nodes
The RAIN project is a research collaboration between Caltech and NASA-JPL on distributed computing and data-storage systems for future spaceborne missions. The goal of the project is to identify and develop key building blocks for reliable distributed systems built with inexpensive off-the-shelf components. The RAIN platform consists of a heterogeneous cluster of computing and/or storage nodes connected via multiple interfaces to networks configured in fault-tolerant topologies. The RAIN software components run in conjunction with operating system services and standard network protocols. Through software-implemented fault tolerance, the system tolerates multiple node, link, and switch failures, with no single point of failure. The RAIN technology has been transferred to Rainfinity, a start-up company focusing on creating clustered solutions for improving the performance and availability of Internet data centers. In this paper, we describe the following contributions: 1) fault-tolerant interconnect topologies and communication protocols providing consistent error reporting of link failures, 2) fault management techniques based on group membership, and 3) data storage schemes based on computationally efficient error-control codes. We present several proof-of-concept applications: a highly available video server, a highly available Web server, and a distributed checkpointing system. We also describe a commercial product, Rainwall, built with the RAIN technology.
Measuring Perceptual Color Differences of Smartphone Photographs
Measuring perceptual color differences (CDs) is of great importance in modern
smartphone photography. Despite the long history, most CD measures have been
constrained by psychophysical data of homogeneous color patches or a limited
number of simplistic natural photographic images. It is thus questionable
whether existing CD measures generalize in the age of smartphone photography
characterized by greater content complexities and learning-based image signal
processors. In this paper, we put together the largest image dataset to date for
perceptual CD assessment, in which the photographic images are 1) captured by
six flagship smartphones, 2) altered by Photoshop, 3) post-processed by
built-in filters of the smartphones, and 4) reproduced with incorrect color
profiles. We then conduct a large-scale psychophysical experiment to gather
perceptual CDs of 30,000 image pairs in a carefully controlled laboratory
environment. Based on the newly established dataset, we make one of the first
attempts to construct an end-to-end learnable CD formula based on a lightweight
neural network, as a generalization of several previous metrics. Extensive
experiments demonstrate that the optimized formula outperforms 33 existing CD
measures by a large margin, offers reasonable local CD maps without the use of
dense supervision, generalizes well to homogeneous color patch data, and
empirically behaves as a proper metric in the mathematical sense. Our dataset
and code are publicly available at https://github.com/hellooks/CDNet.
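One reason a learned CD formula can behave as a proper metric is structural: if the color difference is defined as the distance between embeddings produced by a fixed network, symmetry and the triangle inequality hold by construction. The toy sketch below (a tiny fixed-weight MLP, not CDNet itself) illustrates this.

```python
# Toy illustration (NOT CDNet): defining CD(x, y) = ||f(x) - f(y)|| for
# any fixed embedding network f gives a pseudometric by construction.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 3)), rng.standard_normal((8, 16))

def f(rgb):                     # tiny fixed-weight MLP embedding of an RGB color
    return W2 @ np.tanh(W1 @ rgb)

def cd(x, y):                   # color difference as embedding distance
    return float(np.linalg.norm(f(x) - f(y)))

x, y, z = rng.random(3), rng.random(3), rng.random(3)
assert cd(x, y) == cd(y, x)                      # symmetry
assert cd(x, z) <= cd(x, y) + cd(y, z) + 1e-12   # triangle inequality
assert cd(x, x) == 0.0                           # identity of indiscernibles
print("metric axioms hold")
```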
- 
